This is just an example of what preregRS is capable of, not a good-practice example for a preregistration!
I pasted together information from several preregistrations, so the information in this file may not fit together well.




This is an Rmd template for protocols and reporting of systematic reviews and meta-analyses. It synthesizes three sources of standards:

  • PRISMA-P
  • PROSPERO
  • MARS

The template is aimed at

  • guiding the process of planning systematic reviews/meta-analyses
  • providing a form for preregistration (enter your text, export it as standalone HTML or PDF, and upload it as a preregistration to any preferred repository)
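The export step can be sketched in R (the file name here is hypothetical; rendering itself requires the rmarkdown package):

```r
# hypothetical file name for a filled-in copy of the template
input <- "preregistration.Rmd"

# the standalone file that rendering would produce alongside the input
output <- sub("\\.Rmd$", ".html", input)

# uncomment to knit the template into a self-contained HTML file
# (use output_format = "pdf_document" for a PDF instead):
# rmarkdown::render(input, output_format = "html_document")

output  # "preregistration.html"
```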

We are aware that MARS targets aspects of reporting after the systematic review/meta-analysis is completed, rather than decisions and reasoning in the planning phase as PRISMA-P and PROSPERO do. MARS nevertheless provides a good framework for determining crucial points that systematic reviews/meta-analyses should address as early as the planning phase.

Standards have been partially adapted. The changes and the reasons for them are listed below.

| Standard | Implemented change | Reason |
|:---|:---|:---|
| MARS | Left out the paper section “Abstract” | An abstract is important for reporting, but not for planning and registering. |
| MARS | Left out the paper section “Results” and parts of “Discussion” | Specifications on how to report results are important for reporting, but not for planning and registering. Prospective information on how results will be computed/synthesized is preserved. |
| MARS | Left out “Protocol: List where the full protocol can be found” | This form serves as a preregistration form; a protocol will be generated later in the research process. |
| MARS | Left out “Give the place where the synthesis is registered and its registry number, if registered” | This template serves as a preregistration form. |
| PROSPERO | Left out non-mandatory fields or integrated them into the mandatory fields | Avoids overly detailed specifications; all relevant information will be integrated. |
| PROSPERO | Left out some options in “Type and method of review” | The omitted options are purely health/medicine related. |
| PROSPERO | Left out “Health area of the review” | This field is purely health/medicine related. |
| PRISMA-P | Left out “If registered, provide the name of the registry (such as PROSPERO) and registration number.” | This template serves as a preregistration form. |




1 General

1.1 Working Title

# load the packages used below; kableExtra also provides the %>% pipe
library(knitr)
library(kableExtra)

# avoiding markdown tables because they're not exactly the prettiest flower in the bunch
# set up the table
table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                       "PROSPERO", 
                                       "MARS"),
                            Description = c(# PRISMA-P
                                            "Identify the report as a protocol of a systematic review. If the protocol is for an update of a previous systematic review, identify as such.",
                                            # PROSPERO
                                            "Give the working title of the review, for example the one used for obtaining funding. Ideally the title should state succinctly the interventions or exposures being reviewed and the associated health or social problems. Where appropriate, the title should use the PI(E)COS structure to contain information on the Participants, Intervention (or Exposure) and Comparison groups, the Outcomes to be measured and Study designs to be included. For reviews in languages other than English, this field should be used to enter the title in the language of the review. This will be displayed together with the English language title.",
                                            # MARS
                                            "Title: State the research question and type of research synthesis (e.g., narrative synthesis, meta-analysis)."))

# produce table
knitr::kable(table_sources) %>%
    kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
    column_spec(1, bold = TRUE) %>%
    row_spec(0, background = "#ececec")
Source Description
PRISMA-P Identify the report as a protocol of a systematic review. If the protocol is for an update of a previous systematic review, identify as such.
PROSPERO Give the working title of the review, for example the one used for obtaining funding. Ideally the title should state succinctly the interventions or exposures being reviewed and the associated health or social problems. Where appropriate, the title should use the PI(E)COS structure to contain information on the Participants, Intervention (or Exposure) and Comparison groups, the Outcomes to be measured and Study designs to be included. For reviews in languages other than English, this field should be used to enter the title in the language of the review. This will be displayed together with the English language title.
MARS Title: State the research question and type of research synthesis (e.g., narrative synthesis, meta-analysis).

Example of a preregistration with preregRS

1.2 Type of Review

# avoiding markdown tables because they're not exactly the prettiest flower in the bunch
# set up the table
table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                       "PROSPERO", 
                                       "MARS"),
                            Description = c(# PRISMA-P
                                            "Not specified.",
                                            # PROSPERO
                                            "Type and method of review: Select the type of review and the review method from the lists below. Select the health area(s) of interest for your review.

* Meta-analysis
* Narrative synthesis
* Network meta-analysis
* Review of reviews
* Synthesis of qualitative studies
* Systematic review
* Other",
                                            # MARS
                                            "Not specified."))

# produce table
knitr::kable(table_sources) %>%
    kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
    column_spec(1, bold = TRUE) %>%
    row_spec(0, background = "#ececec")
Source Description
PRISMA-P Not specified.
PROSPERO

Type and method of review: Select the type of review and the review method from the lists below. Select the health area(s) of interest for your review.

  • Meta-analysis
  • Narrative synthesis
  • Network meta-analysis
  • Review of reviews
  • Synthesis of qualitative studies
  • Systematic review
  • Other
MARS Not specified.

Meta-analysis

1.3 Anticipated Start and Completion Date

# avoiding markdown tables because they're not exactly the prettiest flower in the bunch
# set up the table
table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                       "PROSPERO", 
                                       "MARS"),
                            Description = c(# PRISMA-P
                                            "Not specified.",
                                            # PROSPERO
                                            "Give the date when the systematic review commenced, or is expected to commence. Give the date by which the review is expected to be completed.",
                                            # MARS
                                            "Not specified."))

# produce table
knitr::kable(table_sources) %>%
    kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
    column_spec(1, bold = TRUE) %>%
    row_spec(0, background = "#ececec")
Source Description
PRISMA-P Not specified.
PROSPERO Give the date when the systematic review commenced, or is expected to commence. Give the date by which the review is expected to be completed.
MARS Not specified.

May 2021 to Oct 2022

1.4 Stage of Synthesis

# avoiding markdown tables because they're not exactly the prettiest flower in the bunch
# set up the table
table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                       "PROSPERO", 
                                       "MARS"),
                            Description = c(# PRISMA-P
                                            "Not specified.",
                                            # PROSPERO
                                            "Indicate the stage of progress of the review by ticking the relevant Started and Completed boxes. Additional
information may be added in the free text box provided.
Please note: Reviews that have progressed beyond the point of completing data extraction at the time of
initial registration are not eligible for inclusion in PROSPERO. Should evidence of incorrect status and/or
completion date being supplied at the time of submission come to light, the content of the PROSPERO
record will be removed leaving only the title and named contact details and a statement that inaccuracies in
the stage of the review date had been identified.
This field should be updated when any amendments are made to a published record and on completion and
publication of the review. If this field was pre-populated from the initial screening questions then you are not
able to edit it until the record is published.

* The review has not yet started: [yes/no]

| Review stage | Started | Completed |
|:----------------------------------------------------------------|:------:|:------:|
| Preliminary searches | Yes/No | Yes/No |
| Piloting of the study selection process | Yes/No | Yes/No |
| Formal screening of search results against eligibility criteria | Yes/No | Yes/No |
| Data extraction | Yes/No | Yes/No |
| Risk of bias (quality) assessment | Yes/No | Yes/No |
| Data analysis | Yes/No | Yes/No |

Provide any other relevant information about the stage of the review here (e.g. Funded proposal, protocol not
yet finalised).",
                                            # MARS
                                            "Not specified."))

# produce table
knitr::kable(table_sources, escape = FALSE) %>%
    kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
    column_spec(1, bold = TRUE) %>%
    row_spec(0, background = "#ececec")
Source Description
PRISMA-P Not specified.
PROSPERO

Indicate the stage of progress of the review by ticking the relevant Started and Completed boxes. Additional information may be added in the free text box provided. Please note: Reviews that have progressed beyond the point of completing data extraction at the time of initial registration are not eligible for inclusion in PROSPERO. Should evidence of incorrect status and/or completion date being supplied at the time of submission come to light, the content of the PROSPERO record will be removed leaving only the title and named contact details and a statement that inaccuracies in the stage of the review date had been identified. This field should be updated when any amendments are made to a published record and on completion and publication of the review. If this field was pre-populated from the initial screening questions then you are not able to edit it until the record is published.

  • The review has not yet started: [yes/no]
Review stage Started Completed
Preliminary searches Yes/No Yes/No
Piloting of the study selection process Yes/No Yes/No
Formal screening of search results against eligibility criteria Yes/No Yes/No
Data extraction Yes/No Yes/No
Risk of bias (quality) assessment Yes/No Yes/No
Data analysis Yes/No Yes/No
Provide any other relevant information about the stage of the review here (e.g. Funded proposal, protocol not yet finalised).
MARS Not specified.
| Review stage | Started | Completed |
|:----------------------------------------------------------------|:---:|:---:|
| Preliminary searches | Yes | No |
| Piloting of the study selection process | No | No |
| Formal screening of search results against eligibility criteria | No | No |
| Data extraction | No | No |
| Risk of bias (quality) assessment | No | No |
| Data analysis | No | No |

1.5 Names, Affiliations, Contact

# avoiding markdown tables because they're not exactly the prettiest flower in the bunch
# set up the table
table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                       "PROSPERO", 
                                       "MARS"),
                            Description = c(# PRISMA-P
                                            "
* Provide name, institutional affiliation, e-mail address of all protocol authors; provide physical mailing address of corresponding author.
* Describe contributions of protocol authors and identify the guarantor of the review.",
                                            # PROSPERO
                                            "
* Named Contact: The named contact acts as the guarantor for the accuracy of the information presented in the register record.
* Named contact email: Give the electronic mail address of the named contact.
* Organisational affiliation of the review: Full title of the organisational affiliations for this review and website address if available. This field may be completed as 'None' if the review is not affiliated to any organisation.
* Review team members and their organisational affiliations: Give the personal details and the organisational affiliations of each member of the review team. Affiliation refers to groups or organisations to which review team members belong.",
                                            # MARS
                                            "Not specified."))

# produce table
knitr::kable(table_sources) %>%
    kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
    column_spec(1, bold = TRUE) %>%
    row_spec(0, background = "#ececec")
Source Description
PRISMA-P
  • Provide name, institutional affiliation, e-mail address of all protocol authors; provide physical mailing address of corresponding author.
  • Describe contributions of protocol authors and identify the guarantor of the review.
PROSPERO
  • Named Contact: The named contact acts as the guarantor for the accuracy of the information presented in the register record.
  • Named contact email: Give the electronic mail address of the named contact.
  • Organisational affiliation of the review: Full title of the organisational affiliations for this review and website address if available. This field may be completed as ‘None’ if the review is not affiliated to any organisation.
  • Review team members and their organisational affiliations: Give the personal details and the organisational affiliations of each member of the review team. Affiliation refers to groups or organisations to which review team members belong.
MARS Not specified.

    Jürgen Schneider
    University of Tübingen

    https://uni-tuebingen.de/de/175743

    1.6 Collaborators

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Not specified.",
                                                # PROSPERO
                                                "Collaborators (name & affiliation): individuals working on the review who are not review team members.",
                                                # MARS
                                                "Not specified."))
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
        column_spec(1, bold = TRUE) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Not specified.
    PROSPERO Collaborators (name & affiliation): individuals working on the review who are not review team members.
    MARS Not specified.
    • Iris Backfisch (University of Tübingen)

    1.7 Amendments to previous versions

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "If the protocol represents an amendment of a previously completed or published protocol, identify as such and list changes; otherwise, state plan for documenting important protocol amendments.",
                                                # PROSPERO
                                                "Not specified.",
                                                # MARS
                                                "Not specified."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
        column_spec(1, bold = TRUE) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P If the protocol represents an amendment of a previously completed or published protocol, identify as such and list changes; otherwise, state plan for documenting important protocol amendments.
    PROSPERO Not specified.
    MARS Not specified.

    None.

    1.8 Funding Sources, Sponsors and Their Roles

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "
    
    * Indicate sources of financial or other support for the review.
    * Provide name for the review funder and/or sponsor.
    * Describe roles of funder(s), sponsor(s), and/or institution(s), if any, in developing the protocol.
    
    ",
                                                # PROSPERO
                                                "Funding sources/sponsors: Give details of the individuals, organizations, groups or other legal entities who take responsibility for initiating, managing, sponsoring and/or financing the review. Include any unique identification numbers assigned to the review by the individuals or bodies listed. If available, provide grant number(s).",
                                                # MARS
                                                "
    
    * List all sources of monetary and in-kind funding support.  
    * State the role of funders in conducting the synthesis and deciding to publish the results, if any.  
    
    "))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
        column_spec(1, bold = TRUE) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P
    • Indicate sources of financial or other support for the review.
    • Provide name for the review funder and/or sponsor.
    • Describe roles of funder(s), sponsor(s), and/or institution(s), if any, in developing the protocol.
    PROSPERO Funding sources/sponsors: Give details of the individuals, organizations, groups or other legal entities who take responsibility for initiating, managing, sponsoring and/or financing the review. Include any unique identification numbers assigned to the review by the individuals or bodies listed. If available, provide grant number(s).
    MARS
    • List all sources of monetary and in-kind funding support.
    • State the role of funders in conducting the synthesis and deciding to publish the results, if any.

    Supported by the Federal Ministry of Education and Research, Germany.

    1.9 Conflict of Interest

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Not specified.",
                                                # PROSPERO
                                                "List any conditions that could lead to actual or perceived undue influence on judgements concerning the main topic investigated in the review.",
                                                # MARS
                                                "Describe possible conflicts of interest, including financial and other nonfinancial interests."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = TRUE, full_width = TRUE) %>%
        column_spec(1, bold = TRUE) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Not specified.
    PROSPERO List any conditions that could lead to actual or perceived undue influence on judgements concerning the main topic investigated in the review.
    MARS Describe possible conflicts of interest, including financial and other nonfinancial interests.

    None.

    2 Introduction

    2.1 Rationale

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Describe the rationale for the review in the context of what is already known.",
                                                # PROSPERO
                                                "Not specified.",
                                                # MARS
                                                "Problem: State the question or relation(s) under investigation, including
    
    * Historical background, including previous syntheses and meta-analyses related to the topic 
    * Theoretical, policy, and/or practical issues related to the question or relation(s) of interest
    * Populations and settings to which the question or relation(s) is relevant
    * Rationale for
       (a) choice of study designs, 
       (b) the selection and coding of outcomes, 
       (c) the selection and coding potential moderators or mediators of results 
    * Psychometric characteristics of outcome measures and other variables
    "))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Describe the rationale for the review in the context of what is already known.
    PROSPERO Not specified.
    MARS

    Problem: State the question or relation(s) under investigation, including

    • Historical background, including previous syntheses and meta-analyses related to the topic
    • Theoretical, policy, and/or practical issues related to the question or relation(s) of interest
    • Populations and settings to which the question or relation(s) is relevant
    • Rationale for
      1. choice of study designs,
      2. the selection and coding of outcomes,
      3. the selection and coding potential moderators or mediators of results
    • Psychometric characteristics of outcome measures and other variables

    Recent research has provided evidence that teachers’ professional knowledge regarding the adoption of educational technologies is a central determinant of successful teaching with technologies (Petko, 2012). One prominent conceptualization of teachers’ professional knowledge for teaching with technology is the technological-pedagogical-content-knowledge (TPACK) framework established by Mishra and Koehler (2006). Based on this framework, TPACK encompasses three generic knowledge components (technological knowledge, TK; pedagogical knowledge, PK; content knowledge, CK), three intersections of these knowledge components (technological-pedagogical knowledge, TPK; technological-content knowledge, TCK; pedagogical-content knowledge, PCK), and TPACK as an integrated knowledge component “that represents a class of knowledge that is central to teachers’ work with technology” (Mishra & Koehler, 2006, p. 1028); see Figure 1.


    Figure 1. TPACK Model (Mishra & Koehler, 2006; © 2012 by tpack.org)

    The most prominent questionnaire to assess TPACK is the self-report questionnaire by Schmidt et al. (2009). This questionnaire encompasses items on the different knowledge components which ask teachers to self-evaluate their confidence to fulfill a task (e.g., an item for TCK is “I know about technologies that I can use for understanding and doing mathematics”, and for TPK is “I can choose technologies that enhance the teaching approaches for a lesson.”). Recently, researchers have claimed that this self-report questionnaire (and the different extensions and adaptations thereof) taps into teachers’ self-efficacy beliefs about teaching with technology rather than their available knowledge (Scherer et al., 2017; Lachner et al., 2019). Based on the conceptualization of self-efficacy beliefs as the subjective perception of one’s own capability to solve a task (Bandura, 2006), researchers have recently argued that self-report TPACK might be highly intertwined with teachers’ self-efficacy beliefs. Related research suggests that the use of self-report TPACK might complicate the interpretation of the results of empirical studies (Abbitt, 2011; Joo, Park, & Lim, 2018; Fabriz et al., 2020). Therefore, the use of self-report TPACK might induce a jingle-jangle fallacy (Gonzalez et al., 2020; Kelley, 1927). Jingle-jangle fallacies describe a lack of extrinsic convergent validity in two different ways: On the one hand, two measures which are labeled the same might represent two conceptually different constructs (jingle fallacy). In the present case, self-report TPACK might differ from teachers’ knowledge for technology-enhanced teaching to a larger extent than previous research suggests. On the other hand, two measures which are labeled differently might examine the same construct (jangle fallacy). Accordingly, self-report TPACK and self-efficacy beliefs towards technology-enhanced teaching might be similar constructs with comparable implications for teachers’ technology integration (see e.g., Marsh et al., 2019 for an investigation of jingle-jangle fallacies). However, to date a systematic analysis of the potentially problematic validity of self-reported TPACK is missing.

    Against this background, three complementary approaches will be applied in this paper to examine the validity of self-reported TPACK.

    First, we will analyze meta-analytically how the different knowledge components of the TPACK model (i.e., TK, CK, PK, TCK, TPK, PCK) are related to each other across studies when examined with self-report TPACK questionnaires. Measures that depict TPACK components that are more proximal to each other, or that are intersections of each other in the model, should show higher correlations than more distal measures or measures that do not intersect (e.g., TK should correlate more highly with TCK than with PCK).
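As a rough illustration of this first step (made-up correlations and sample sizes; an actual analysis would likely use a dedicated package such as metafor and a random-effects model), correlations between two TPACK components could be pooled across studies via Fisher's z transformation:

```r
# made-up example data: correlations between two TPACK components
# reported by three hypothetical studies, with their sample sizes
r <- c(0.50, 0.60, 0.40)
n <- c(100, 80, 120)

# Fisher's r-to-z transformation; the sampling variance of z is 1 / (n - 3)
z <- atanh(r)
w <- n - 3   # inverse-variance weights (fixed-effect)

# inverse-variance weighted mean in z space, back-transformed to r
z_pooled <- sum(w * z) / sum(w)
r_pooled <- tanh(z_pooled)
round(r_pooled, 2)
```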

    Second, potential jingle fallacies of self-report TPACK and teachers’ knowledge for technology-enhanced teaching will be examined. Hence, it will be investigated whether the two measures represent the same concept, as proposed by researchers (e.g., Schmidt et al., 2009). Therefore, the extent to which self-reported TPACK and more objective measures of teachers’ knowledge for technology-enhanced teaching are related to each other will be examined. If, as proposed, self-report TPACK represents teachers’ knowledge for technology-enhanced teaching, self-report TPACK should be highly related to the quality of technology use for teaching (see e.g., Kunter et al., 2013; Ericsson, 2006 for the importance of teacher knowledge for generic teaching quality). To investigate the relationship of self-reported TPACK and performance-based measures of teachers’ knowledge for technology-enhanced teaching, empirical studies that examine these measures will be reviewed, such as studies that investigate the role of self-reported TPACK for the quality of technology-enhanced lesson planning (e.g., Backfisch et al., 2020; Kopcha et al., 2014) or test-based approaches (e.g., Akyuz, 2018; Krauskopf & Forssell, 2013; So & Kim, 2009).

    Third, potential jangle fallacies of self-report TPACK and self-efficacy beliefs towards technology-enhanced teaching will be examined. Therefore, the magnitudes of the correlations of self-reported TPACK and of self-efficacy beliefs towards technology-enhanced teaching will be compared. Furthermore, the extent to which both measures are related to teachers’ technology integration (e.g., frequency of technology integration) will be analyzed. If self-report TPACK and self-efficacy beliefs are related to outcome variables to the same magnitude, both measures might represent conceptually similar constructs.
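As a minimal sketch of such a comparison (hypothetical correlations and sample sizes; a full analysis would have to account for dependencies between effect sizes measured in the same samples), the difference between two correlations from independent samples can be tested on the Fisher z scale:

```r
# hypothetical correlations of each measure with an outcome
# (e.g., frequency of technology integration), from independent samples
r_tpack <- 0.45; n_tpack <- 200   # self-report TPACK vs. outcome
r_se    <- 0.40; n_se    <- 200   # self-efficacy beliefs vs. outcome

# Fisher z-test for the difference between two independent correlations
z_diff <- atanh(r_tpack) - atanh(r_se)
se     <- sqrt(1 / (n_tpack - 3) + 1 / (n_se - 3))
z_stat <- z_diff / se
p_val  <- 2 * pnorm(-abs(z_stat))   # two-sided p value

round(c(z = z_stat, p = p_val), 3)
```

A non-significant difference here would be consistent with (though not proof of) a jangle fallacy; equivalence testing would make that claim more rigorous.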

    References

    Abbitt, J. T. (2011). Measuring technological pedagogical content knowledge in preservice teacher education: A review of current methods and instruments. Journal of Research on Technology in Education, 43(4), 281–300. https://doi.org/10.1080/15391523.2011.10782573

    Akyuz, D. (2018). Measuring technological pedagogical content knowledge (TPACK) through performance assessment. Computers & Education, 125, 212–225. https://doi.org/10.1016/j.compedu.2018.06.012

    Backfisch, I., Lachner, A., Hische, C., Loose, F., & Scheiter, K. (2020). Professional knowledge or motivation? Investigating the role of teachers’ expertise on the quality of technology-enhanced lesson plans. Learning & Instruction, 66, 101300. https://doi.org/10.1016/j.learninstruc.2019.101300

    Bandura, A. (2006). Guide for constructing self-efficacy scales. In Self-efficacy beliefs of adolescents (pp. 307–337). https://doi.org/10.1017/CBO9781107415324.004

    Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. The Cambridge Handbook of Expertise and Expert Performance, 38, 685-705.

    Fabriz, S., Hansen, M., Heckmann, C., Mordel, J., Mendzheritskaya, J., Stehle, S., Schulze-Vorberg, L., Ulrich, I., & Horz, H. (2020). How a professional development programme for university teachers impacts their teaching-related self-efficacy, self-concept, and subjective knowledge. Higher Education Research & Development, 1–15.

    Gonzalez, O., MacKinnon, D. P., & Muniz, F. B. (2020). Extrinsic convergent validity evidence to prevent jingle and jangle fallacies. Multivariate Behavioral Research, 1–17.

    Joo, Y. J., Park, S., & Lim, E. (2018). Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and Technology Acceptance Model. Educational Technology and Society, 21(3), 48–59.

    Kelley, T. L. (1927). Interpretation of educational measurements.

    Kopcha, T. J., Ottenbreit-Leftwich, A., Jung, J., & Baser, D. (2014). Examining the TPACK framework through the convergent and discriminant validity of two measures. Computers & Education, 78, 87–96. https://doi.org/10.1016/j.compedu.2014.05.003

    Krauskopf, K., & Forssell, K. (2013). I have TPCK! – What does that mean? Examining the External Validity of TPCK Self-Reports. Proceedings of Society for Information Technology & Teacher Education International Conference 2013, 2190–2197. http://www.stanford.edu/~forssell/papers/SITE2013_TPCK_validity.pdf

    Kunter, M., Klusmann, U., Baumert, J., Richter, D., Voss, T., & Hachfeld, A. (2013). Professional competence of teachers: Effects on instructional quality and student development. Journal of Educational Psychology, 105, 805–820. https://doi.org/10.1037/a0032583

    Lachner, A., Backfisch, I., & Stürmer, K. (2019). A test-based approach of Modeling and Measuring Technological Pedagogical Knowledge. Computers & Education, 103645. https://doi.org/10.1016/j.compedu.2019.103645

    Marsh, H. W., Pekrun, R., Parker, P. D., Murayama, K., Guo, J., Dicke, T., & Arens, A. K. (2019). The murky distinction between self-concept and self-efficacy: Beware of lurking jingle-jangle fallacies. Journal of Educational Psychology, 111(2), 331.

    Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for integrating technology in teacher knowledge. Teachers College Record, 108(6), 1017–1054.

    Petko, D. (2012). Teachers’ pedagogical beliefs and their use of digital media in classrooms: Sharpening the focus of the “will, skill, tool” model and integrating teachers’ constructivist orientations. Computers & Education, 58, 1351–1359. https://doi.org/10.1016/j.compedu.2011.12.013

    Scherer, R., Tondeur, J., & Siddiq, F. (2017). On the quest for validity: Testing the factor structure and measurement invariance of the technology-dimensions in the Technological, Pedagogical, and Content Knowledge (TPACK) model. Computers & Education, 112, 1–17. https://doi.org/10.1016/j.compedu.2017.04.012

    Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149. https://doi.org/10.1080/15391523.2009.10782544

    So, H. J., & Kim, B. (2009). Learning about problem based learning: Student teachers integrating technology, pedagogy and content knowledge. Australasian Journal of Educational Technology, 25(1), 101–116. https://doi.org/10.14742/ajet.1183

    2.2 Research Questions

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Provide an explicit statement of the question(s) the review will address with reference to participants, interventions, comparators, and outcomes (PICO)",
                                                # PROSPERO
                                                "State the question(s) to be addressed by the review, clearly and precisely. Review questions may be specific or broad. It may be appropriate to break very broad questions down into a series of related more specific questions. Questions may be framed or refined using PI(E)COS where relevant.",
                                                # MARS
                                                "Objectives: State the hypotheses examined, indicating which were prespecified, including
    
    * Question in terms of relevant participant characteristics (including animal populations), independent variables (experimental manipulations, treatments, or interventions), ruling out of possible confounding variables, dependent variables (outcomes, criterion), and other features of study designs
    * Method(s) of synthesis and if meta-analysis was used, the specific methods used to integrate studies (e.g., effect-size metric, averaging method, the model used in homogeneity analysis)"))
    
    # produce table
    library(kableExtra)    # provides kable_styling(), column_spec(), row_spec(), and %>%
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Provide an explicit statement of the question(s) the review will address with reference to participants, interventions, comparators, and outcomes (PICO)
    PROSPERO State the question(s) to be addressed by the review, clearly and precisely. Review questions may be specific or broad. It may be appropriate to break very broad questions down into a series of related more specific questions. Questions may be framed or refined using PI(E)COS where relevant.
    MARS

    Objectives: State the hypotheses examined, indicating which were prespecified, including

    • Question in terms of relevant participant characteristics (including animal populations), independent variables (experimental manipulations, treatments, or interventions), ruling out of possible confounding variables, dependent variables (outcomes, criterion), and other features of study designs
    • Method(s) of synthesis and if meta-analysis was used, the specific methods used to integrate studies (e.g., effect-size metric, averaging method, the model used in homogeneity analysis)

    • RQ1: How are the different knowledge components of the TPACK model related to each other when examined with self-report TPACK questionnaires?
    • RQ2: Does the use of self-reported TPACK questionnaires constitute a jingle fallacy with teachers’ knowledge for technology-enhanced teaching?
      • RQ2a: To what extent is self-reported TPACK related to performance-based measures of knowledge for technology-enhanced teaching?
    • RQ3: Does the use of self-reported TPACK questionnaires constitute a jangle fallacy with self-efficacy beliefs?
      • RQ3a: To what extent is self-reported TPACK related to self-efficacy beliefs?
      • RQ3b: To what extent are self-reported TPACK and self-efficacy beliefs differently related to self-reported technology integration?

    3 Methods

    3.1 Eligibility: Inclusion and Exclusion Criteria

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS",
                                           "Added by authors"),
                                Description = c(# PRISMA-P
                                                "Specify the study characteristics (such as PICO, study design, setting, time frame) and report characteristics (such as years considered, language, publication status) to be used as criteria for eligibility for the review.",
                                                # PROSPERO
                                                "Give details of the types of study (study designs) eligible for inclusion in the review. If there are no restrictions on the types of study design eligible for inclusion, or certain study types are excluded, this should be stated. The preferred format includes details of both inclusion and exclusion criteria.",
                                                # MARS
                                                "Describe the criteria for selecting studies, including
    
    * Independent variables (e.g., experimental manipulations, types of treatments or interventions or predictor variables).
    * Dependent variable (e.g., outcomes, in syntheses of clinical research including both potential benefits and potential adverse effects).
    * Eligible study designs (e.g., methods of sampling or treatment assignment).
    * Handling of multiple reports about the same study or sample, describing which are primary and handling of multiple measures using the same participants.
    * Restrictions on study inclusion (e.g., by study age, language, location, or report type).
    * Changes to the prespecified inclusion and exclusion criteria, and when these changes were made.
    * Handling of reports that did not contain sufficient information to judge eligibility (e.g., lacking information about study design) and reports that did not include sufficient information for analysis (e.g., did not report numerical data about those outcomes).",
                                                # added by authors
                                                "Alternative approaches (to PICO) to describe study characteristics:
    
    * SPIDER: relevant, when including qualitative research (https://doi.org/10.1177/1049732312452938)
    * PICOS: Compared to PICO includes study design and reaches higher specifity (ISBN: 978-1-900640-47-3; https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf)
    * UTOS: Cronbach's classical framework (ISBN: 978-0875895253)"))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Specify the study characteristics (such as PICO, study design, setting, time frame) and report characteristics (such as years considered, language, publication status) to be used as criteria for eligibility for the review.
    PROSPERO Give details of the types of study (study designs) eligible for inclusion in the review. If there are no restrictions on the types of study design eligible for inclusion, or certain study types are excluded, this should be stated. The preferred format includes details of both inclusion and exclusion criteria.
    MARS

    Describe the criteria for selecting studies, including

    • Independent variables (e.g., experimental manipulations, types of treatments or interventions or predictor variables).

    • Dependent variable (e.g., outcomes, in syntheses of clinical research including both potential benefits and potential adverse effects).

    • Eligible study designs (e.g., methods of sampling or treatment assignment).

    • Handling of multiple reports about the same study or sample, describing which are primary and handling of multiple measures using the same participants.

    • Restrictions on study inclusion (e.g., by study age, language, location, or report type).

    • Changes to the prespecified inclusion and exclusion criteria, and when these changes were made.

    • Handling of reports that did not contain sufficient information to judge eligibility (e.g., lacking information about study design) and reports that did not include sufficient information for analysis (e.g., did not report numerical data about those outcomes).

    Added by authors

    Alternative approaches (to PICO) to describe study characteristics:

    • SPIDER: relevant when including qualitative research (https://doi.org/10.1177/1049732312452938)

    • PICOS: compared to PICO, includes study design and reaches higher specificity (ISBN: 978-1-900640-47-3; https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf)

    • UTOS: Cronbach’s classical framework (ISBN: 978-0875895253)

    3.1.1 Inclusion Criteria

    PICOS

    • Population:
      • formal learning context (planned as a learning activity to improve writing skills)
      • L1 (first language)
      • publication year: 2003 or later (rationale: see ‘Search Strategy’)
    • Intervention:
      • system-/computer-generated feedback
      • formative feedback
      • immediate feedback
      • targeted at improving writing
    • Comparison/ Control:
      • none
    • Outcome: improvement of text quality
      • cohesion (organization)
      • comprehension/comprehensibility
      • lexical measures
      • sentence structure (variety of sentences)
      • overall development (awareness of purpose, task, and audience)
      • (different!) overall quality scores
      • specificity (problems, solutions, localization)
    • Study Type
      • empirical (quantitative and qualitative)

    3.1.2 Exclusion Criteria

    PICOS

    • Population:
      • no formal learning context (e.g. novel writing; diary writing)
      • not in the context of writing
      • L2 learning (second language learning)
    • Intervention:
      • peer feedback
      • summative feedback
      • delayed feedback
      • teacher feedback using computer tools (mediated feedback)
      • targeted at improving content learning
    • Comparison/ Control:
      • not empirical
    • Outcome:
      • does not focus on text quality
      • conceptual content knowledge
    • Study Type
      • purely conceptual

    3.2 Sources of Search: List and Rationale

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Not specified.",
                                                # PROSPERO
                                                "Searches: State the sources that will be searched. Give the search dates, and any restrictions (e.g. language or publication period). Do NOT enter the full search strategy (it may be provided as a link or attachment).",
                                                # MARS
                                                "Describe all information sources:
    
    * Databases searched (e.g., PsycINFO, ClinicalTrials.gov), including dates of coverage (i.e., earliest and latest records included in the search), and software and search platforms used
    * Names of specific journals that were searched and the volumes checked
    * Explanation of rationale for choosing reference lists if examined (e.g., other relevant articles, previous research
    syntheses)
    * Documents for which forward (citation) searches were conducted, stating why these documents were chosen
    * Number of researchers contacted if study authors or individual researchers were contacted to find studies or to obtain more information about included studies, as well as criteria for making contact (e.g., previous relevant publications), and response rate
    * Dates of contact if other direct contact searches were conducted such as contacting corporate sponsors or mailings to distribution lists
    * Search strategies in addition to those above and the results of these searches
    
    "))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Not specified.
    PROSPERO Searches: State the sources that will be searched. Give the search dates, and any restrictions (e.g. language or publication period). Do NOT enter the full search strategy (it may be provided as a link or attachment).
    MARS

    Describe all information sources:

    • Databases searched (e.g., PsycINFO, ClinicalTrials.gov), including dates of coverage (i.e., earliest and latest records included in the search), and software and search platforms used
    • Names of specific journals that were searched and the volumes checked
    • Explanation of rationale for choosing reference lists if examined (e.g., other relevant articles, previous research syntheses)
    • Documents for which forward (citation) searches were conducted, stating why these documents were chosen
    • Number of researchers contacted if study authors or individual researchers were contacted to find studies or to obtain more information about included studies, as well as criteria for making contact (e.g., previous relevant publications), and response rate
    • Dates of contact if other direct contact searches were conducted such as contacting corporate sponsors or mailings to distribution lists
    • Search strategies in addition to those above and the results of these searches

    • specific target journals (checked whether they are already indexed), with search terms
      • all target journals are indexed in one of the databases (ERIC, PsycINFO, or Web of Science)
    • databases:
      • ERIC
      • PsycINFO
      • Web of Science
      • first 100 results from Google Scholar
    • backward search from articles:
      • after screening abstracts and titles, we use the three latest synthesis articles (reviews or meta-analyses) and screen their references
    • preprint servers:
      • PsychArchives
      • PsyArXiv & SocArXiv

    3.3 Search Strategy

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS",
                                           "Added by authors"),
                                Description = c(# PRISMA-P
                                                "Present draft of search strategy to be used for at least one electronic database, including planned limits, such that it could be repeated.",
                                                # PROSPERO
                                                "URL to search strategy: Give a link to a published pdf/word document detailing either the search strategy or an example of a search strategy for a specific database if available (including the keywords that will be used in the search strategies), or upload your search strategy. Do NOT provide links to your search results. Alternatively, upload your search strategy to CRD in pdf format. Please note that by doing so you are consenting to the file being made publicly accessible.",
                                                # MARS
                                                "Describe all information sources: Search strategies of electronic searches, such that they could be repeated (e.g., include the search terms used, Boolean connectors, fields searched, explosion of terms).",
                                                # added by authors
                                                "Checklist for Search Strategy: \"PRESS\" Peer Review of Electronic Search Strategies https://doi.org/10.1016/j.jclinepi.2016.01.021"))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Present draft of search strategy to be used for at least one electronic database, including planned limits, such that it could be repeated.
    PROSPERO URL to search strategy: Give a link to a published pdf/word document detailing either the search strategy or an example of a search strategy for a specific database if available (including the keywords that will be used in the search strategies), or upload your search strategy. Do NOT provide links to your search results. Alternatively, upload your search strategy to CRD in pdf format. Please note that by doing so you are consenting to the file being made publicly accessible.
    MARS Describe all information sources: Search strategies of electronic searches, such that they could be repeated (e.g., include the search terms used, Boolean connectors, fields searched, explosion of terms).
    Added by authors Checklist for Search Strategy: “PRESS” Peer Review of Electronic Search Strategies https://doi.org/10.1016/j.jclinepi.2016.01.021

    Search String: PICO

    • Participants: NOT (peer OR medic* OR neural-network OR health* OR care*)
    • Intervention: (computer* OR automat*) AND (writ* AND (argument* OR essay* OR summary OR exposit* OR expla*)) AND (feedback OR evaluat* OR assess* OR scor*)
    • Control: none
    • Outcome: none
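
    The PICO components above can be assembled into a single boolean query string. A minimal R sketch (the exact field tags, truncation syntax, and handling of the NOT clause vary between ERIC, PsycINFO, and Web of Science, so this is illustrative, not database-ready):

    ```r
    # Sketch: combine the PICO components above into one boolean query.
    intervention <- paste(
      "(computer* OR automat*)",
      "(writ* AND (argument* OR essay* OR summary OR exposit* OR expla*))",
      "(feedback OR evaluat* OR assess* OR scor*)",
      sep = " AND "
    )
    exclusions <- "(peer OR medic* OR neural-network OR health* OR care*)"

    query <- paste0(intervention, " NOT ", exclusions)
    cat(query)
    ```

    Keeping the components in separate variables makes it easy to adapt the string per database while documenting the shared core of the strategy.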

    Timespan:
    2003–2020
    Around 2003, several programs were either established or got a major upgrade (see below). Before this timeframe, programs were rarely powered by machine-learning (or related) algorithms; they had a smaller or different range of functions and were thus effectively different programs. We therefore chose this timeframe to make the analyses more comparable and reduce bias.

    Examples:

    • MyAccess: established in 2003
    • Project Essay Grade: Bought by Measurement, Inc. in 2002/2003 and subsequently utilized an AI scoring engine
    • Criterion: major upgrade of scoring engine from e-rater V1.3 to e-rater V2.0 in 2004 (Attali & Burstein, 2004)
    • Summary Street: established between 2002 and 2004 (Wade-Stein & Kintsch, 2004)

    3.4 Data Management Tools Used

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Describe the mechanism(s) that will be used to manage records and data throughout the review.",
                                                # PROSPERO
                                                "Not specified.",
                                                # MARS
                                                "Not specified."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Describe the mechanism(s) that will be used to manage records and data throughout the review.
    PROSPERO Not specified.
    MARS Not specified.
    • Mendeley (Reference Management) to collect and sort out the search results from the databases
    • Rayyan QCRI (https://rayyan.qcri.org/welcome) to include or exclude articles by several raters

    3.5 Selection of Studies

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "State the process that will be used for selecting studies (such as two independent reviewers) through each phase of the review (that is, screening, eligibility and inclusion in meta-analysis).",
                                                # PROSPERO
                                                "Data extraction (selection and coding): Describe how studies will be selected for inclusion. State what data will be extracted or obtained. State how this will be done and recorded.",
                                                # MARS
                                                "Describe the process for deciding which studies would be included in the syntheses and/or included in the meta-analysis, including
    
    * Document elements (e.g., title, abstract, full text) used to make decisions about inclusion or exclusion from the synthesis at each step of the screening process 
    * Qualifications (e.g., training, educational or professional status) of those who conducted each step in the study selection process, stating whether each step was conducted by a single person or in duplicate as well as an explanation of how reliability was assessed if one screener was used and how disagreements were resolved if multiple were used."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P State the process that will be used for selecting studies (such as two independent reviewers) through each phase of the review (that is, screening, eligibility and inclusion in meta-analysis).
    PROSPERO Data extraction (selection and coding): Describe how studies will be selected for inclusion. State what data will be extracted or obtained. State how this will be done and recorded.
    MARS

    Describe the process for deciding which studies would be included in the syntheses and/or included in the meta-analysis, including

    • Document elements (e.g., title, abstract, full text) used to make decisions about inclusion or exclusion from the synthesis at each step of the screening process
    • Qualifications (e.g., training, educational or professional status) of those who conducted each step in the study selection process, stating whether each step was conducted by a single person or in duplicate as well as an explanation of how reliability was assessed if one screener was used and how disagreements were resolved if multiple were used.

    Two independent reviewers conduct each of the following steps:

    Process of revision:

    1. screening titles; if we cannot make a decision based on the title alone, we will screen the abstract
    2. screening all abstracts
    3. screening full texts

    At each of these three steps, articles will be selected for inclusion if the inclusion criteria apply and none of the exclusion criteria apply; otherwise they will be excluded. If it is not clear whether an article should be included or excluded, it will be marked as “maybe” and discussed by both reviewers together at the end. Articles will also be discussed by both reviewers in case of disagreement; such disagreements will be resolved by screening the full texts using the inclusion criteria.
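
    If interrater reliability of the screening decisions is to be quantified, Cohen’s kappa can be computed from the two reviewers’ decisions. A minimal base-R sketch (the decision vectors are illustrative, not real screening data):

    ```r
    # Sketch: Cohen's kappa for two reviewers' screening decisions (illustrative data).
    rater1 <- c("include", "exclude", "exclude", "include", "maybe", "exclude")
    rater2 <- c("include", "exclude", "include", "include", "maybe", "exclude")

    cohens_kappa <- function(a, b) {
      lv  <- union(a, b)
      tab <- table(factor(a, levels = lv), factor(b, levels = lv))
      n   <- sum(tab)
      p_obs <- sum(diag(tab)) / n                      # observed agreement
      p_exp <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
      (p_obs - p_exp) / (1 - p_exp)
    }

    round(cohens_kappa(rater1, rater2), 2)  # → 0.74
    ```

    Dedicated packages (e.g., irr) offer the same statistic with confidence intervals; the hand-rolled version above just makes the calculation transparent.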

    3.6 Method of Extracting Data & Information (from Reports)

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Describe planned method of extracting data from reports (such as piloting forms, done independently, in duplicate), any processes for obtaining and confirming data from investigators.",
                                                # PROSPERO
                                                "Not specified.",
                                                # MARS
                                                "Describe methods of extracting data from reports, including 
    
    * Variables for which data were sought and the variable categories. 
    * Qualifications of those who conducted each step in the data extraction process, stating whether each step was conducted by a single person or in duplicate and an explanation of how reliability was assessed if one screener was used and how disagreements were resolved if multiple screeners were used as well as whether data coding forms, instructions for completion, and the data (including metadata) are available, stating where they can be found (e.g., public registry, supplemental materials)."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Describe planned method of extracting data from reports (such as piloting forms, done independently, in duplicate), any processes for obtaining and confirming data from investigators.
    PROSPERO Not specified.
    MARS

    Describe methods of extracting data from reports, including

    • Variables for which data were sought and the variable categories.
    • Qualifications of those who conducted each step in the data extraction process, stating whether each step was conducted by a single person or in duplicate and an explanation of how reliability was assessed if one screener was used and how disagreements were resolved if multiple screeners were used as well as whether data coding forms, instructions for completion, and the data (including metadata) are available, stating where they can be found (e.g., public registry, supplemental materials).
    • Studies will be coded in Rayyan as far as possible.
    • Beyond that: Both reviewers either use a standardized Excel form or a self-programmed dashboard that produces a relational database.
    • After data input both reviewers check for consistency and discuss discrepancies.
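
    The consistency check after data input can be sketched in R by comparing the two reviewers’ extraction tables cell by cell (column names and values here are illustrative, not the actual coding form):

    ```r
    # Sketch: flag discrepancies between two reviewers' extraction tables.
    coder_a <- data.frame(study_id = 1:3, sample_size = c(120, 85, 60),
                          effect_size = c(0.31, 0.12, 0.45))
    coder_b <- data.frame(study_id = 1:3, sample_size = c(120, 85, 64),
                          effect_size = c(0.31, 0.10, 0.45))

    # TRUE wherever the two coders disagree (study_id column excluded)
    disagree <- coder_a[-1] != coder_b[-1]

    # row/column indices of the cells to discuss:
    # here sample_size for study 3 and effect_size for study 2
    which(disagree, arr.ind = TRUE)
    ```

    Running such a check before the discussion meeting turns “check for consistency” into a reproducible step rather than a manual side-by-side comparison.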

    3.7 List and Description of Data and Information Extracted

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "
    * List and define all variables for which data will be sought (such as PICO items, funding sources), any pre-planned data assumptions and simplifications
    * List and define all outcomes for which data will be sought, including prioritization of main and additional outcomes, with rationale",
                                                # PROSPERO
                                                "
    * Condition or domain being studied: Give a short description of the disease, condition or healthcare domain being studied. This could include health and wellbeing outcomes.
    * Participants/population: Give summary criteria for the participants or populations being studied by the review. The preferred format includes details of both inclusion and exclusion criteria.
    * Intervention(s), exposure(s): Give full and clear descriptions or definitions of the nature of the interventions or the exposures to be reviewed.
    * Comparator(s)/control: Where relevant, give details of the alternatives against which the main subject/topic of the review will be compared (e.g. another intervention or a non-exposed control group). The preferred format includes details of both inclusion and exclusion criteria.
    * Main and additional outcome(s): Give the pre-specified main (most important) outcomes of the review, including details of how the outcome is defined and measured and when these measurement are made, if these are part of the review inclusion criteria.
    * Measures of effect: Please specify the effect measure(s) for your main outcome(s) e.g. relative risks, odds ratios, risk difference,
    and/or 'number needed to treat'.",
                                                # MARS
                                                "Not specified."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P
    • List and define all variables for which data will be sought (such as PICO items, funding sources), any pre-planned data assumptions and simplifications
    • List and define all outcomes for which data will be sought, including prioritization of main and additional outcomes, with rationale
    PROSPERO
    • Condition or domain being studied: Give a short description of the disease, condition or healthcare domain being studied. This could include health and wellbeing outcomes.
  • Participants/population: Give summary criteria for the participants or populations being studied by the review. The preferred format includes details of both inclusion and exclusion criteria.
  • Intervention(s), exposure(s): Give full and clear descriptions or definitions of the nature of the interventions or the exposures to be reviewed.
  • Comparator(s)/control: Where relevant, give details of the alternatives against which the main subject/topic of the review will be compared (e.g. another intervention or a non-exposed control group). The preferred format includes details of both inclusion and exclusion criteria.
  • Main and additional outcome(s): Give the pre-specified main (most important) outcomes of the review, including details of how the outcome is defined and measured and when these measurements are made, if these are part of the review inclusion criteria.
  • Measures of effect: Please specify the effect measure(s) for your main outcome(s), e.g. relative risks, odds ratios, risk difference, and/or ‘number needed to treat’.
    MARS Not specified.
    • publication status [scholarly published, grey literature]
    • visual presentation of feedback [graphical, text, mixed]
    • specificity [problem, solution, localization]
    • summarization
    • explanation
    • scope
    • content focus (success criteria) of feedback [lower order, higher order]
    • tool/software used
    • quantitative empirical studies
      • sample size
      • geographic location
      • condition characteristics
      • measured construct
      • effect size
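
    To keep extraction consistent, the variables above can be laid out as columns of a coding sheet; a minimal sketch in R (column names and factor levels are illustrative, not the final codebook — the authoritative definitions live in the Codebook below):

    ```r
    # illustrative skeleton of the coding sheet; names and levels are
    # assumptions, the authoritative definitions live in the Codebook
    coding_sheet <- data.frame(
      study_id            = character(),
      publication_status  = factor(levels = c("scholarly published", "grey literature")),
      visual_presentation = factor(levels = c("graphical", "text", "mixed")),
      specificity         = factor(levels = c("problem", "solution", "localization")),
      content_focus       = factor(levels = c("lower order", "higher order")),
      tool_software       = character(),
      sample_size         = integer(),
      effect_size         = numeric()
    )
    str(coding_sheet)
    ```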

    Download the Codebook here:

    xfun::embed_file('Codebook.pdf',
                     text = "Click here to download Codebook.")


    3.8 Effect Size Transformation from Individual Studies

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Not specified.",
                                                # PROSPERO
                                                "Not specified.",
                                                # MARS
                                                "Describe the statistical methods for calculating effect sizes, including the metric(s) used (e.g., correlation coefficients, differences in means, risk ratios) and formula(s) used to calculate effect sizes."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Not specified.
    PROSPERO Not specified.
    MARS Describe the statistical methods for calculating effect sizes, including the metric(s) used (e.g., correlation coefficients, differences in means, risk ratios) and formula(s) used to calculate effect sizes.
    library(metafor)
    # fabricated example data to test functionality of code
    # (no seed is set, so the simulated values differ on each run)
    ex_df <- data.frame(study = c("Author 1 (2020)", "Author 2 (2019)", 
                                  "Author 3 (2019)", "Author 4 (2018)"),
                        m_tc = rnorm(4, 4, 1),    # mean treatment group
                        m_cc = rnorm(4, 3.5, 1),  # mean control group
                        sd_tc = rnorm(4, 1, .1),  # SD treatment group
                        sd_cc = rnorm(4, 1, .1),  # SD control group
                        n_tc = c(22, 25, 44, 32), # n treatment group
                        n_cc = c(22, 25, 40, 33)) # n control group
    
    # calculate effect sizes as SMD
    ex_df <- escalc(measure="SMD",          # standardized mean difference
                    n1i = n_tc, n2i = n_cc, # group sizes
                    m1i = m_tc, m2i = m_cc, # means
                    sd1i = sd_tc, sd2i = sd_cc, # standard deviations
                    data=ex_df)
    
    ex_df
    ##             study      m_tc     m_cc     sd_tc     sd_cc n_tc n_cc      yi 
    ## 1 Author 1 (2020) 2.8593425 2.730474 0.8118327 0.9479845   22   22  0.1434 
    ## 2 Author 2 (2019) 2.5098845 3.720494 0.9732726 0.9613695   25   25 -1.2318 
    ## 3 Author 3 (2019) 4.1319889 4.079114 0.9758181 0.9089996   44   40  0.0555 
    ## 4 Author 4 (2018) 0.9534369 2.858065 0.8929215 0.8140840   32   33 -2.2041 
    ##       vi 
    ## 1 0.0911 
    ## 2 0.0952 
    ## 3 0.0477 
    ## 4 0.0989
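
    If some primary studies report correlations instead of group means, the same `escalc()` workflow applies with a different measure; a sketch with fabricated correlations (Fisher's r-to-z transformation):

    ```r
    library(metafor)

    # fabricated correlations to illustrate the transformation
    cor_df <- data.frame(study = c("Author 5 (2021)", "Author 6 (2017)"),
                         ri = c(.32, .18),  # observed correlations
                         ni = c(120, 85))   # sample sizes

    # ZCOR: yi = atanh(ri) (Fisher's z), vi = 1/(ni - 3)
    cor_df <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = cor_df)
    cor_df
    ```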

    3.9 Risk of Bias in Individual Studies

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS",
                                           "Added by authors"),
                                Description = c(# PRISMA-P
                                                "Describe anticipated methods for assessing risk of bias of individual studies, including whether this will be done at the outcome or study level, or both; state how this information will be used in data synthesis.",
                                                # PROSPERO
                                                "Risk of bias (quality) assessment: Describe the method of assessing risk of bias or quality assessment. State which characteristics of the studies will be assessed and any formal risk of bias tools that will be used.",
                                                # MARS
                                                "Describe any methods used to assess risk to internal validity in individual study results, including
    
    * Risks assessed and criteria for concluding risk exists or does not exist.
    * Methods for including risk to internal validity in the decisions to synthesize the data and the interpretation of results.",
                                                # added by authors
                                                "Describe how the quality of original studies is rated, e.g. by 'The Study Design and Implementation Assessment Device (Study DIAD)': https://doi.org/10.1037/1082-989X.13.2.130 "))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Describe anticipated methods for assessing risk of bias of individual studies, including whether this will be done at the outcome or study level, or both; state how this information will be used in data synthesis.
    PROSPERO Risk of bias (quality) assessment: Describe the method of assessing risk of bias or quality assessment. State which characteristics of the studies will be assessed and any formal risk of bias tools that will be used.
    MARS

    Describe any methods used to assess risk to internal validity in individual study results, including

    • Risks assessed and criteria for concluding risk exists or does not exist.
    • Methods for including risk to internal validity in the decisions to synthesize the data and the interpretation of results.
    Added by authors Describe how the quality of original studies is rated, e.g. by ‘The Study Design and Implementation Assessment Device (Study DIAD)’: https://doi.org/10.1037/1082-989X.13.2.130

    We will use a subset of the items/ dimensions from study DIAD (Valentine & Cooper, 2008).
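
    As an illustration, ratings on the selected Study DIAD dimensions could be recorded per study roughly like this (the dimension labels paraphrase the four global questions in Valentine & Cooper, 2008; the three-step rating scale is an assumption, not part of the sources above):

    ```r
    # hypothetical quality-rating sheet; dimension names paraphrase the four
    # global Study DIAD questions, the yes/maybe/no scale is illustrative
    diad_ratings <- data.frame(
      study                = c("Author 1 (2020)", "Author 2 (2019)"),
      construct_validity   = c("yes", "maybe"),  # fit of concepts and operations
      internal_validity    = c("yes", "no"),     # clarity of causal inference
      external_validity    = c("maybe", "yes"),  # generality of findings
      statistical_validity = c("yes", "yes")     # precision of outcome estimation
    )
    diad_ratings
    ```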

    4 Results

    4.1 Strategy for Data Synthesis

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "
    * Describe criteria under which study data will be quantitatively synthesised.
    * If data are appropriate for quantitative synthesis, describe planned summary measures, methods of handling data and methods of combining data from studies, including any planned exploration of consistency (such as I², Kendall’s τ).
    * If quantitative synthesis is not appropriate, describe the type of summary planned.",
                                                # PROSPERO
                                                "Strategy for data synthesis: Provide details of the planned synthesis including a rationale for the methods selected. This must not be generic text but should be specific to your review and describe how the proposed analysis will be applied to your data.",
                                                # MARS
                                                "Describe narrative and statistical methods used to compare studies. If meta-analysis was conducted, describe the methods used to combine effects across studies and the model used to estimate the heterogeneity of the effect sizes (e.g., a fixed-effect, random-effects model, robust variance estimation), including
    
    * Rationale for the method of synthesis.
    * Methods for weighting study results.
    * Methods to estimate imprecision (e.g., confidence or credibility intervals) both within and between studies.
    * Description of all transformations or corrections (e.g., to account for small samples or unequal group numbers) and adjustments (e.g., for clustering, missing data, measurement artifacts, or construct-level relationships) made to the data and justification for these.
    * Additional analyses (e.g., subgroup analyses, meta-regression), including whether each analysis was prespecified or post hoc.
    * Selection of prior distributions and assessment of model fit if Bayesian analyses were conducted.
    * Name and version number of computer programs used for the analysis.
    * Statistical code and where it can be found (e.g., a supplement)."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P
    • Describe criteria under which study data will be quantitatively synthesised.

    • If data are appropriate for quantitative synthesis, describe planned summary measures, methods of handling data and methods of combining data from studies, including any planned exploration of consistency (such as I², Kendall’s τ).

    • If quantitative synthesis is not appropriate, describe the type of summary planned.

    PROSPERO

    Strategy for data synthesis: Provide details of the planned synthesis including a rationale for the methods selected. This must not be generic text but should be specific to your review and describe how the proposed analysis will be applied to your data.

    MARS

    Describe narrative and statistical methods used to compare studies. If meta-analysis was conducted, describe the methods used to combine effects across studies and the model used to estimate the heterogeneity of the effect sizes (e.g., a fixed-effect, random-effects model, robust variance estimation), including

  • Rationale for the method of synthesis.

  • Methods for weighting study results.

  • Methods to estimate imprecision (e.g., confidence or credibility intervals) both within and between studies.

  • Description of all transformations or corrections (e.g., to account for small samples or unequal group numbers) and adjustments (e.g., for clustering, missing data, measurement artifacts, or construct-level relationships) made to the data and justification for these.

  • Additional analyses (e.g., subgroup analyses, meta-regression), including whether each analysis was prespecified or post hoc.

  • Selection of prior distributions and assessment of model fit if Bayesian analyses were conducted.

  • Name and version number of computer programs used for the analysis.

  • Statistical code and where it can be found (e.g., a supplement).

    library(dmetar)
    library(tibble)
    library(plotly)
    
    # create empty data.frame to be filled
    results.power <- data.frame(k = numeric(),
                                d = numeric(),
                                power = numeric())
    
    for (k in 5:40) {                       # loop from k=5 to 40 studies
      for (d in c(.25, .30, .35)) {
        # compute power analyses for each k (random effects model)
        tmp <- power.analysis(d=d,     # expected effect size
                              k=k,     # expected number of studies
                              n1=25,   # expected mean group size in treatment group
                              n2=25,   # expected mean group size in control group
                              p=0.05,  # alpha level
                              heterogeneity = "moderate") # expected heterogeneity
        
        results.power <- add_row(results.power, 
                                 k = k, 
                                 d = d,
                                 power = tmp$Power)
      }
    }
    
    # plot results
    plot_ly(data = results.power, 
            x = ~k,
            y = ~power,
            hovertemplate = "<b>k:</b> %{x} <br /><b>power:</b> %{y}",
            type="scatter",
            color=~as.factor(d), 
            mode="lines+markers",
            split=~as.factor(d))  %>% 
       add_lines(x = c(5, 40), 
                 y = c(.8, .8), 
                 inherit = F, 
                 name = "80% power")

    The results indicate at least 80% power for 9, 12, or 17 included studies, depending on the assumed effect size. We take a conservative approach, assuming Cohen’s \(d=.25\), and will thus test the synthesized effect size once we are able to include 17 studies.

    # show data.frame
    results.power
    ##      k    d     power
    ## 1    5 0.25 0.3314842
    ## 2    5 0.30 0.4464151
    ## 3    5 0.35 0.5655469
    ## 4    6 0.25 0.3856321
    ## 5    6 0.30 0.5157014
    ## 6    6 0.35 0.6435011
    ## 7    7 0.25 0.4375575
    ## 8    7 0.30 0.5790673
    ## 9    7 0.35 0.7103232
    ## 10   8 0.25 0.4869204
    ## 11   8 0.30 0.6362987
    ## 12   8 0.35 0.7666693
    ## 13   9 0.25 0.5334934
    ## 14   9 0.30 0.6874399
    ## 15   9 0.35 0.8135264
    ## 16  10 0.25 0.5771421
    ## 17  10 0.30 0.7327163
    ## 18  10 0.35 0.8520299
    ## 19  11 0.25 0.6178080
    ## 20  11 0.30 0.7724747
    ## 21  11 0.35 0.8833411
    ## 22  12 0.25 0.6554939
    ## 23  12 0.30 0.8071352
    ## 24  12 0.35 0.9085706
    ## 25  13 0.25 0.6902505
    ## 26  13 0.30 0.8371559
    ## 27  13 0.35 0.9287337
    ## 28  14 0.25 0.7221656
    ## 29  14 0.30 0.8630056
    ## 30  14 0.35 0.9447297
    ## 31  15 0.25 0.7513544
    ## 32  15 0.30 0.8851455
    ## 33  15 0.35 0.9573354
    ## 34  16 0.25 0.7779518
    ## 35  16 0.30 0.9040157
    ## 36  16 0.35 0.9672092
    ## 37  17 0.25 0.8021056
    ## 38  17 0.30 0.9200272
    ## 39  17 0.35 0.9749003
    ## 40  18 0.25 0.8239713
    ## 41  18 0.30 0.9335569
    ## 42  18 0.35 0.9808606
    ## 43  19 0.25 0.8437078
    ## 44  19 0.30 0.9449457
    ## 45  19 0.35 0.9854577
    ## 46  20 0.25 0.8614735
    ## 47  20 0.30 0.9544982
    ## 48  20 0.35 0.9889878
    ## 49  21 0.25 0.8774243
    ## 50  21 0.30 0.9624839
    ## 51  21 0.35 0.9916876
    ## 52  22 0.25 0.8917112
    ## 53  22 0.30 0.9691388
    ## 54  22 0.35 0.9937443
    ## 55  23 0.25 0.9044786
    ## 56  23 0.30 0.9746685
    ## 57  23 0.35 0.9953056
    ## 58  24 0.25 0.9158639
    ## 59  24 0.30 0.9792506
    ## 60  24 0.35 0.9964867
    ## 61  25 0.25 0.9259959
    ## 62  25 0.30 0.9830375
    ## 63  25 0.35 0.9973774
    ## 64  26 0.25 0.9349955
    ## 65  26 0.30 0.9861596
    ## 66  26 0.35 0.9980472
    ## 67  27 0.25 0.9429745
    ## 68  27 0.30 0.9887275
    ## 69  27 0.35 0.9985493
    ## 70  28 0.25 0.9500365
    ## 71  28 0.30 0.9908350
    ## 72  28 0.35 0.9989247
    ## 73  29 0.25 0.9562765
    ## 74  29 0.30 0.9925609
    ## 75  29 0.35 0.9992047
    ## 76  30 0.25 0.9617814
    ## 77  30 0.30 0.9939714
    ## 78  30 0.35 0.9994129
    ## 79  31 0.25 0.9666306
    ## 80  31 0.30 0.9951221
    ## 81  31 0.35 0.9995675
    ## 82  32 0.25 0.9708959
    ## 83  32 0.30 0.9960589
    ## 84  32 0.35 0.9996820
    ## 85  33 0.25 0.9746425
    ## 86  33 0.30 0.9968204
    ## 87  33 0.35 0.9997666
    ## 88  34 0.25 0.9779291
    ## 89  34 0.30 0.9974383
    ## 90  34 0.35 0.9998290
    ## 91  35 0.25 0.9808085
    ## 92  35 0.30 0.9979389
    ## 93  35 0.35 0.9998749
    ## 94  36 0.25 0.9833280
    ## 95  36 0.30 0.9983438
    ## 96  36 0.35 0.9999087
    ## 97  37 0.25 0.9855300
    ## 98  37 0.30 0.9986708
    ## 99  37 0.35 0.9999334
    ## 100 38 0.25 0.9874524
    ## 101 38 0.30 0.9989345
    ## 102 38 0.35 0.9999515
    ## 103 39 0.25 0.9891288
    ## 104 39 0.30 0.9991469
    ## 105 39 0.35 0.9999648
    ## 106 40 0.25 0.9905891
    ## 107 40 0.30 0.9993177
    ## 108 40 0.35 0.9999744

    If we are able to include enough studies, we will fit a random-effects model. This is an example of the code with fabricated example data.

    # calculating random effects model
    rem <- rma(yi,           # effect sizes
               vi,           # variances
               data=ex_df)  
    summary(rem)
    ## 
    ## Random-Effects Model (k = 4; tau^2 estimator: REML)
    ## 
    ##   logLik  deviance       AIC       BIC      AICc 
    ##  -4.5992    9.1983   13.1983   11.3956   25.1983   
    ## 
    ## tau^2 (estimated amount of total heterogeneity): 1.1699 (SE = 1.0229)
    ## tau (square root of estimated tau^2 value):      1.0816
    ## I^2 (total heterogeneity / total variability):   93.65%
    ## H^2 (total variability / sampling variability):  15.76
    ## 
    ## Test for Heterogeneity:
    ## Q(df = 3) = 45.2501, p-val < .0001
    ## 
    ## Model Results:
    ## 
    ## estimate      se     zval    pval    ci.lb   ci.ub 
    ##  -0.7992  0.5596  -1.4280  0.1533  -1.8960  0.2977    
    ## 
    ## ---
    ## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
    forest(rem)
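
    MARS also asks for imprecision estimates of the between-study heterogeneity; with `metafor` these are available via `confint()`. A self-contained sketch (the effect sizes below are fabricated placeholders, not the values from the example above):

    ```r
    library(metafor)

    # fabricated effect sizes and variances as placeholders
    ex <- data.frame(yi = c(0.14, -1.23, 0.06, -2.20),
                     vi = c(0.091, 0.095, 0.048, 0.099))

    rem_ex <- rma(yi, vi, data = ex)  # random-effects model (REML)

    # 95% confidence intervals for tau^2, tau, I^2, and H^2
    confint(rem_ex)
    ```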

    4.2 Moderators/ Subgroups

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Describe any proposed additional analyses (such as sensitivity or subgroup analyses, meta-regression).",
                                                # PROSPERO
                                                "Analysis of subgroups or subsets: State any planned investigation of ‘subgroups’. Be clear and specific about which type of study or participant will be included in each group or covariate investigated. State the planned analytic approach.",
                                                # MARS
                                                "Not specified."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Describe any proposed additional analyses (such as sensitivity or subgroup analyses, meta-regression).
    PROSPERO Analysis of subgroups or subsets: State any planned investigation of ‘subgroups’. Be clear and specific about which type of study or participant will be included in each group or covariate investigated. State the planned analytic approach.
    MARS Not specified.

    main categorization

    • visual presentation of feedback [graphical, text, mixed]
    • specificity [problem, solution, localization]
    • summarization
    • explanation
    • scope
    • content focus (success criteria) of feedback [lower order, higher order]
    • tool/software used

    categorization for further description

    • Side of Recipients:
      • prerequisites of the sample/ target group (gender, age, grade, experience, motivation, writing ability, self-concept)
      • process (cognitive load, understanding, editing time, text length)
      • implementation/ use of feedback
    • Side of the Agent
      • intention of use
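
    The categorical moderators above can be tested as meta-regressions in `metafor`; a minimal sketch with fabricated data (the moderator levels mirror the ‘visual presentation of feedback’ category):

    ```r
    library(metafor)

    # fabricated effect sizes with a categorical moderator
    mod_df <- data.frame(yi = c(0.14, -1.23, 0.06, -2.20, 0.40, -0.10),
                         vi = c(0.091, 0.095, 0.048, 0.099, 0.080, 0.070),
                         visual = c("graphical", "text", "mixed",
                                    "graphical", "text", "mixed"))

    # mixed-effects meta-regression; the QM statistic tests the moderator
    res_mod <- rma(yi, vi, mods = ~ visual, data = mod_df)
    summary(res_mod)
    ```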

    4.3 Assessment of Publication Bias

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Specify any planned assessment of meta-bias(es) (such as publication bias across studies, selective reporting within studies)",
                                                # PROSPERO
                                                "Not specified.",
                                                # MARS
                                                "Describe risk of bias across studies, including
    
    * Statement about whether
       (a) unpublished studies and unreported data, or 
       (b) only published data were included in the synthesis and the rationale if only published data were used
    * Assessments of the impact of publication bias (e.g., modeling of data censoring, trim-and-fill analysis)
    * Results of any statistical analyses looking for selective reporting of results within studies"))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Specify any planned assessment of meta-bias(es) (such as publication bias across studies, selective reporting within studies)
    PROSPERO Not specified.
    MARS

    Describe risk of bias across studies, including

    • Statement about whether
      1. unpublished studies and unreported data, or
      2. only published data were included in the synthesis and the rationale if only published data were used
    • Assessments of the impact of publication bias (e.g., modeling of data censoring, trim-and-fill analysis)
    • Results of any statistical analyses looking for selective reporting of results within studies

    File-drawer analyses: p-curves for the correlations
    Publication bias of single correlations:

    • Moderator effects of publication status (published vs. grey literature)
    • Fail-safe N analyses
    • Trim-and-fill analyses
    • Asymmetry tests of funnel plots
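
    Most of the planned checks map directly onto `metafor` functions; a self-contained sketch with fabricated data (p-curve analysis is not part of `metafor` and is omitted here):

    ```r
    library(metafor)

    # fabricated effect sizes/variances for illustration
    pb_df <- data.frame(yi = c(0.14, -1.23, 0.06, -2.20, 0.40, -0.10),
                        vi = c(0.091, 0.095, 0.048, 0.099, 0.080, 0.070))
    pb_rem <- rma(yi, vi, data = pb_df)

    fsn(yi, vi, data = pb_df)  # fail-safe N (Rosenthal)
    trimfill(pb_rem)           # trim-and-fill analysis
    regtest(pb_rem)            # regression test for funnel plot asymmetry
    funnel(pb_rem)             # funnel plot
    ```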

    5 Discussion

    5.1 Strength of Cumulative Evidence

    # avoiding markdown tables because they're not exactly the prettiest flower in the bunch
    # set up the table
    table_sources <- data.frame(Source = c("PRISMA-P",     # first column will be always the same
                                           "PROSPERO", 
                                           "MARS"),
                                Description = c(# PRISMA-P
                                                "Describe how the strength of the body of evidence will be assessed (such as [GRADE](https://www.gradeworkinggroup.org/)).",
                                                # PROSPERO
                                                "Not specified.",
                                                # MARS
                                                "Describe the generalizability (external validity) of conclusions, including implications for related populations, intervention variations, and dependent (outcome) variables."))
    
    # produce table
    knitr::kable(table_sources) %>%
        kable_styling(fixed_thead = T, full_width = T) %>%
        column_spec(1, bold = T) %>%
        row_spec(0, background = "#ececec")
    Source Description
    PRISMA-P Describe how the strength of the body of evidence will be assessed (such as GRADE).
    PROSPERO Not specified.
    MARS Describe the generalizability (external validity) of conclusions, including implications for related populations, intervention variations, and dependent (outcome) variables.

    We will examine the strength of our evidence by computing Cohen’s q. Additionally, we will evaluate the overall strength of evidence with the GRADE framework (Grading of Recommendations Assessment, Development and Evaluation), examining the following dimensions: overall risk of bias (based on publication bias and the quality of included studies), inconsistency (whether findings are consistent across studies), and indirectness (whether study participants are part of the target population).
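
    Cohen’s q is the difference between two Fisher-z transformed correlations; a minimal sketch of the computation (the correlation values are placeholders, not estimates):

    ```r
    # Cohen's q: difference of Fisher's z-transformed correlations
    cohens_q <- function(r1, r2) atanh(r1) - atanh(r2)

    # placeholder correlations, e.g. estimates from two subgroups
    cohens_q(0.50, 0.30)
    ```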